
    On Temporal Graph Exploration

    A temporal graph is a graph in which the edge set can change from step to step. The temporal graph exploration problem TEXP is the problem of computing a foremost exploration schedule for a temporal graph, i.e., a temporal walk that starts at a given start node, visits all nodes of the graph, and has the smallest arrival time. In the first part of the paper, we consider only temporal graphs that are connected at each step. For such temporal graphs with n nodes, we show that it is NP-hard to approximate TEXP with ratio O(n^{1-ε}) for any ε > 0. We also provide an explicit construction of temporal graphs that require Θ(n^2) steps to be explored. We then consider TEXP under the assumption that the underlying graph (i.e., the graph that contains all edges that are present in the temporal graph in at least one step) belongs to a specific class of graphs. Among other results, we show that temporal graphs can be explored in O(n^{1.5} k^2 log n) steps if the underlying graph has treewidth k, and in O(n log^3 n) steps if the underlying graph is a 2×n grid. In the second part of the paper, we replace the connectedness assumption by a weaker assumption and show that m-edge temporal graphs with regularly present edges and with random edges can always be explored in O(m) steps and O(m log n) steps with high probability, respectively. We finally show that the latter result can be used to obtain a distributed algorithm for the gossiping problem. Comment: This is an extended version of an ICALP 2015 paper.
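
    As an illustration of the central notion, the single-target analogue of a foremost exploration (the earliest arrival time at each node from the start node) can be computed by one forward pass over the steps. The following is a minimal sketch under an assumed edge-list-per-step representation, not an algorithm from the paper.

        # Minimal sketch: earliest-arrival times in a temporal graph.
        # edges_by_step[t] lists the undirected edges present at step t;
        # a temporal walk may cross at most one present edge per step.
        def earliest_arrival(n, edges_by_step, start):
            INF = float("inf")
            arrival = [INF] * n
            arrival[start] = 0
            for t, edges in enumerate(edges_by_step):
                for u, v in edges:
                    # A node already reached by step t can cross (u, v) now.
                    if arrival[u] <= t:
                        arrival[v] = min(arrival[v], t + 1)
                    if arrival[v] <= t:
                        arrival[u] = min(arrival[u], t + 1)
            return arrival

        # Edge (0,1) exists at step 0 and edge (1,2) at step 1, so node 2
        # becomes reachable at step 2.
        print(earliest_arrival(3, [[(0, 1)], [(1, 2)]], start=0))  # [0, 1, 2]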

    Scheduling with Explorable Uncertainty

    We introduce a novel model for scheduling with explorable uncertainty. In this model, the processing time of a job can potentially be reduced (by an a priori unknown amount) by testing the job. Testing a job j takes one unit of time and may reduce its processing time from the given upper limit p'_j (which is the time taken to execute the job if it is not tested) to any value between 0 and p'_j. This setting is motivated, e.g., by applications where a code optimizer can be run on a job before executing it. We consider the objective of minimizing the sum of completion times on a single machine. All jobs are available from the start, but the reduction in their processing times as a result of testing is unknown, making this an online problem that is amenable to competitive analysis. The need to balance the time spent on tests and the time spent on job executions adds a novel flavor to the problem. We give the first and nearly tight lower and upper bounds on the competitive ratio for deterministic and randomized algorithms. We also show that minimizing the makespan is a considerably easier problem for which we give optimal deterministic and randomized online algorithms.
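
    A toy simulation makes the test-or-execute tension concrete. The threshold rule below (test a job only if its upper limit is large) is an illustrative heuristic, not the algorithm from the paper, and the final ordering is chosen with hindsight, which an online algorithm cannot do.

        # Toy simulation of the testing model on a single machine. Job j
        # has upper limit u_j; a test costs 1 time unit and reveals the
        # true time p_j in [0, u_j]; running untested takes u_j. For
        # simplicity the schedule is ordered with hindsight (SPT).
        def total_completion_time(jobs, threshold):
            """jobs: list of (upper_limit, true_time) pairs. Tests a job
            iff its upper limit exceeds `threshold`."""
            runs = []
            for u, p in jobs:
                if u > threshold:
                    runs.append(1 + p)   # one unit of testing, then p_j
                else:
                    runs.append(u)       # execute untested at the limit
            runs.sort()                  # SPT minimizes sum of completions
            clock = total = 0
            for r in runs:
                clock += r
                total += clock
            return total

        jobs = [(5, 1), (2, 2), (4, 0)]
        print(total_completion_time(jobs, threshold=1))   # 10: test all
        print(total_completion_time(jobs, threshold=10))  # 19: test none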

    Wavelength Conversion in All-Optical Networks with Shortest-Path Routing

    We consider all-optical networks with shortest-path routing that use wavelength-division multiplexing and employ wavelength conversion at specific nodes in order to maximize their capacity usage. We present efficient algorithms for deciding whether a placement of wavelength converters allows the network to run at maximum capacity, and for finding an optimal wavelength assignment when such a placement of converters is known. Our algorithms apply to both undirected and directed networks. Furthermore, we show that the problem of designing such networks, i.e., finding an optimal placement of converters, is MAX SNP-hard in both the undirected and the directed case. Finally, we give a linear-time algorithm for finding an optimal placement of converters in undirected triangle-free networks, and show that the problem remains NP-hard in bidirected triangle-free planar networks.
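
    For intuition on capacity usage without converters, consider the special case of a chain: shortest paths are intervals, and the classical greedy interval-coloring pass assigns wavelengths so that their number equals the maximum edge load. This sketch shows only that textbook special case, not the converter-placement algorithms of the paper.

        # Greedy wavelength assignment for paths (intervals) on a chain.
        # Classical fact: the number of wavelengths needed equals the
        # maximum number of intervals sharing an edge (the maximum load).
        import heapq

        def assign_wavelengths(intervals):
            """intervals: list of (left, right) half-open edge ranges.
            Returns one wavelength index per interval."""
            order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
            free = []       # reusable wavelengths (min-heap)
            in_use = []     # (right_end, wavelength) of active intervals
            next_wl = 0
            result = [None] * len(intervals)
            for i in order:
                left, right = intervals[i]
                # Release wavelengths of intervals that end before `left`.
                while in_use and in_use[0][0] <= left:
                    _, wl = heapq.heappop(in_use)
                    heapq.heappush(free, wl)
                wl = heapq.heappop(free) if free else next_wl
                if wl == next_wl:
                    next_wl += 1
                result[i] = wl
                heapq.heappush(in_use, (right, wl))
            return result

        # Two wavelengths suffice, matching the maximum load of 2.
        print(assign_wavelengths([(0, 3), (1, 2), (2, 4)]))  # [0, 1, 1]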

    Learning-Augmented Query Policies for Minimum Spanning Tree with Uncertainty

    We study how to utilize (possibly erroneous) predictions in a model for computing under uncertainty in which an algorithm can query unknown data. Our aim is to minimize the number of queries needed to solve the minimum spanning tree problem, a fundamental combinatorial optimization problem that has been central also to the research area of explorable uncertainty. For all integral γ ≥ 2, we present algorithms that are γ-robust and (1+1/γ)-consistent, meaning that they use at most γ·OPT queries if the predictions are arbitrarily wrong and at most (1+1/γ)·OPT queries if the predictions are correct, where OPT is the optimal number of queries for the given instance. Moreover, we show that this trade-off is best possible. Furthermore, we argue that a suitably defined hop distance is a useful measure for the amount of prediction error and design algorithms with performance guarantees that degrade smoothly with the hop distance. We also show that the predictions are PAC-learnable in our model. Our results demonstrate that untrusted predictions can circumvent the known lower bound of 2, without any degradation of the worst-case ratio. To obtain our results, we provide new structural insights for the minimum spanning tree problem that might be useful in the context of query-based algorithms regardless of predictions. In particular, we generalize the concept of witness sets - the key to lower-bounding the optimum - by proposing novel global witness set structures and completely new ways of adaptively using them.
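
    The shape of the stated robustness/consistency trade-off is easy to tabulate; the snippet below is merely a worked illustration of the guarantees quoted above.

        # Worked illustration of the trade-off: a γ-robust, (1 + 1/γ)-
        # consistent algorithm uses at most γ·OPT queries under arbitrarily
        # wrong predictions and (1 + 1/γ)·OPT under correct ones.
        for gamma in (2, 3, 5, 10):
            print(f"γ={gamma:2d}: robust ≤ {gamma}·OPT, "
                  f"consistent ≤ {1 + 1/gamma:.2f}·OPT")
        # As γ grows, robustness degrades linearly while consistency tends
        # to 1·OPT, i.e., toward fully trusting the predictions.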

    Package Delivery Using Drones with Restricted Movement Areas

    For the problem of delivering a package from a source node to a destination node in a graph using a set of drones, we study the setting where the movements of each drone are restricted to a certain subgraph of the given graph. We consider the objectives of minimizing the delivery time (problem DDT) and of minimizing the total energy consumption (problem DDC). For general graphs, we show a strong inapproximability result and a matching approximation algorithm for DDT as well as NP-hardness and a 2-approximation algorithm for DDC. For the special case of a path, we show that DDT is NP-hard if the drones have different speeds. For trees, we give optimal algorithms under the assumption that all drones have the same speed or the same energy consumption rate. The results for trees extend to arbitrary graphs if the subgraph of each drone is isometric.
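
    As a toy illustration of DDT on a path, suppose (a strong simplification, not the paper's model) that every drone already waits at its pickup point, so only carry time counts; minimizing delivery time then reduces to a shortest path over the endpoints of the drones' movement intervals.

        # Toy DDT on a line under the simplification stated above: drone i
        # may operate only inside [a_i, b_i] and carries at speed s_i.
        import heapq

        def toy_delivery_time(drones, L):
            """drones: list of (a, b, speed). Returns the minimum carry
            time from 0 to L, or None if the areas leave a gap."""
            points = sorted({0, L} | {a for a, _, _ in drones}
                            | {b for _, b, _ in drones})
            dist = {p: float("inf") for p in points}
            dist[0] = 0.0
            pq = [(0.0, 0)]
            while pq:
                d, x = heapq.heappop(pq)
                if x == L:
                    return d
                if d > dist[x]:
                    continue
                for a, b, s in drones:
                    if a <= x < b:                # pickup possible at x
                        for y in points:
                            if x < y <= b:        # drop-off within area
                                nd = d + (y - x) / s
                                if nd < dist[y]:
                                    dist[y] = nd
                                    heapq.heappush(pq, (nd, y))
            return None

        # Handing over at 3 is optimal: 3/1 + 7/2 = 6.5.
        print(toy_delivery_time([(0, 4, 1.0), (3, 10, 2.0)], L=10))  # 6.5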

    Call Control in Rings

    The call control problem is an important optimization problem encountered in the design and operation of communication networks. The goal of the call control problem in rings is to compute, for a given ring network with edge capacities and a set of paths in the ring, a maximum cardinality subset of the paths such that no edge capacity is violated. We give a polynomial-time algorithm to solve the problem optimally. The algorithm is based on a decision procedure that checks whether a solution with at least k paths exists, which is in turn implemented by an iterative greedy approach operating in rounds. We show that the algorithm can be implemented efficiently and, as a by-product, obtain a linear-time algorithm to solve the problem in chains optimally. For the weighted version of call control in rings, where each path is associated with a weight and the goal is to maximize the total weight of the paths in the solution, we present a simple 2-approximation algorithm and a polynomial-time approximation scheme. While the complexity of the weighted version remains open, we show that it is at least as hard as the bipartite exact matching problem, which is not known to be either polynomial-time solvable or NP-hard. This latter result follows from recent work by Hochbaum and Levi.
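
    For the special case of a chain with unit edge capacities, call control reduces to classical activity selection: paths are intervals, and the textbook greedy by earliest right endpoint is optimal. The sketch below shows only this special case, not the ring algorithm of the paper.

        # Classical special case: call control on a chain with unit edge
        # capacities is maximum edge-disjoint interval selection.
        def max_disjoint_paths(paths):
            """paths: list of (left, right) half-open edge ranges on a
            chain. Returns a maximum-cardinality edge-disjoint subset."""
            accepted = []
            last_end = float("-inf")
            for left, right in sorted(paths, key=lambda p: p[1]):
                if left >= last_end:   # no edge shared with accepted paths
                    accepted.append((left, right))
                    last_end = right
            return accepted

        print(max_disjoint_paths([(0, 3), (2, 5), (4, 7), (6, 9)]))
        # [(0, 3), (4, 7)]: (2, 5) and (6, 9) each clash with an accepted path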

    Round-competitive algorithms for uncertainty problems with parallel queries

    In computing with explorable uncertainty, one considers problems where the values of some input elements are uncertain, typically represented as intervals, but can be obtained using queries. Previous work has considered query minimization in the settings where queries are asked sequentially (adaptive model) or all at once (non-adaptive model). We introduce a new model where k queries can be made in parallel in each round, and the goal is to minimize the number of query rounds. Using competitive analysis, we present upper and lower bounds on the number of query rounds required by any algorithm in comparison with the optimal number of query rounds for the given instance. Given a set of uncertain elements and a family of m subsets of that set, we study the problems of sorting all m subsets and of determining the minimum value (or the minimum element(s)) of each subset. We also study the selection problem, i.e., the problem of determining the i-th smallest value and identifying all elements with that value in a given set of uncertain elements. Our results include 2-round-competitive algorithms for sorting and selection and an algorithm for the minimum value problem that uses at most (2 + ε) · opt_k + O((1/ε) · lg m) query rounds for every 0 < ε < 1, where opt_k is the optimal number of query rounds.
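
    A naive round strategy for the minimum value problem illustrates the model, though not the round-competitive algorithm of the paper: each round, query the k unqueried intervals with the smallest lower bounds until the minimum value is certified.

        # Toy round model: values lie in open intervals (low, high) and a
        # query reveals the exact value; up to k queries per round.
        def min_value_in_rounds(intervals, values, k):
            """intervals: list of open (low, high) pairs; values[i] is the
            hidden exact value. Returns (minimum value, rounds used)."""
            known = {}
            rounds = 0
            while True:
                best = min(known.values(), default=float("inf"))
                open_ = [i for i in range(len(intervals)) if i not in known]
                min_open_low = min((intervals[i][0] for i in open_),
                                   default=float("inf"))
                # Certified: no unqueried (open) interval can undercut the
                # best revealed value, since its value exceeds its low.
                if known and best <= min_open_low:
                    return best, rounds
                open_.sort(key=lambda i: intervals[i][0])
                for i in open_[:k]:
                    known[i] = values[i]
                rounds += 1

        ivs = [(1, 5), (2, 6), (4, 9), (7, 8)]
        vals = [3, 2.5, 8, 7.5]
        print(min_value_in_rounds(ivs, vals, k=2))  # (2.5, 1)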

    Sorting and Hypergraph Orientation under Uncertainty with Predictions

    Learning-augmented algorithms have been attracting increasing interest, but have only recently been considered in the setting of explorable uncertainty where precise values of uncertain input elements can be obtained by a query and the goal is to minimize the number of queries needed to solve a problem. We study learning-augmented algorithms for sorting and hypergraph orientation under uncertainty, assuming access to untrusted predictions for the uncertain values. Our algorithms provide improved performance guarantees for accurate predictions while maintaining worst-case guarantees that are best possible without predictions. For sorting, our algorithm uses the optimal number of queries for accurate predictions and at most twice the optimal number for arbitrarily wrong predictions. For hypergraph orientation, for any γ ≥ 2, we give an algorithm that uses at most 1 + 1/γ times the optimal number of queries for accurate predictions and at most γ times the optimal number for arbitrarily wrong predictions. These tradeoffs are the best possible. We also consider different error metrics and show that the performance of our algorithms degrades smoothly with the prediction error in all the cases where this is possible.
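
    Without predictions, a classic witness-style baseline orients a single hyperedge (identifies its minimum vertex) by repeatedly querying the vertex whose interval has the smallest lower bound while that interval still overlaps another. The sketch below shows this well-known baseline under assumed open intervals; it is not the learning-augmented algorithm of the paper.

        # Witness-style baseline for orienting one hyperedge: while the
        # vertex with the smallest lower bound cannot be certified as the
        # minimum, query it (a query collapses its interval to a point).
        def orient_edge(intervals, values):
            """intervals: dict vertex -> open (low, high); values: dict
            vertex -> hidden value. Returns (min vertex, queries used)."""
            cur = dict(intervals)      # current knowledge per vertex
            queries = 0
            while True:
                i = min(cur, key=lambda v: cur[v][0])
                low_i, high_i = cur[i]
                # Certified if no other interval starts below i's upper bound.
                if all(v == i or cur[v][0] >= high_i for v in cur):
                    return i, queries
                cur[i] = (values[i], values[i])   # query reveals the value
                queries += 1

        ivs = {"a": (1, 6), "b": (2, 7), "c": (5, 9)}
        vals = {"a": 4, "b": 3, "c": 8}
        print(orient_edge(ivs, vals))  # ('b', 2)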